A function space framework for structural total variation regularization with applications in inverse problems
In this work, we introduce a function space setting for a wide class of
structural/weighted total variation (TV) regularization methods motivated by
their applications in inverse problems. In particular, we consider a
regularizer that is the appropriate lower semi-continuous envelope (relaxation)
of a suitable total variation type functional initially defined for
sufficiently smooth functions. We study examples where this relaxation can be
expressed explicitly, and we also provide refinements for weighted total
variation for a wide range of weights. Since an integral characterization of
the relaxation in function space is not always available, we show that, for a
rather general linear inverse problem setting, instead of the
classical Tikhonov regularization problem, one can equivalently solve a
saddle-point problem where no a priori knowledge of an explicit formulation of
the structural TV functional is needed. In particular, motivated by concrete
applications, we deduce corresponding results for linear inverse problems with
norm and Poisson log-likelihood data discrepancy terms. Finally, we provide
proof-of-concept numerical examples where we solve the saddle-point problem for
weighted TV denoising as well as for MR-guided PET image reconstruction.
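To make the saddle-point idea concrete, the sketch below implements a standard primal-dual (Chambolle-Pock-type) iteration for discrete weighted TV denoising. The function names (`weighted_tv_denoise`, `grad`, `div`), the discretization, and the step-size choices are generic illustrative assumptions, not the formulation of the paper; the weight `w` bounds the dual variable pointwise, which is what makes the TV term "weighted".

```python
import numpy as np

def grad(u):
    # forward differences with Neumann boundary conditions
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # negative adjoint of grad (backward differences)
    d = np.zeros_like(px)
    d[0, :] += px[0, :]; d[1:-1, :] += px[1:-1, :] - px[:-2, :]
    d[-1, :] -= px[-2, :]
    d[:, 0] += py[:, 0]; d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]
    d[:, -1] -= py[:, -2]
    return d

def weighted_tv_denoise(f, w, lam=1.0, n_iter=200):
    """Solve min_u sum_ij w_ij |(grad u)_ij| + (lam/2) ||u - f||^2
    via the primal-dual saddle-point iteration (illustrative sketch)."""
    u = f.copy(); ubar = u.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    tau = sigma = 1.0 / np.sqrt(8.0)  # tau * sigma * ||grad||^2 <= 1
    for _ in range(n_iter):
        # dual ascent step, then pointwise projection onto {|p_ij| <= w_ij}
        gx, gy = grad(ubar)
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2) / np.maximum(w, 1e-12))
        px /= norm; py /= norm
        # primal proximal step for the quadratic data term
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        ubar = 2 * u - u_old  # extrapolation
    return u
```

Note that no closed-form expression for the weighted TV functional is ever evaluated; the algorithm only touches its dual constraint set, which mirrors the point of the saddle-point reformulation.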
Regularization graphs—a unified framework for variational regularization of inverse problems
We introduce and study a mathematical framework for a broad class of
regularization functionals for ill-posed inverse problems: Regularization
Graphs. Regularization graphs allow one to construct functionals with linear
operators and convex functionals as building blocks, assembled by means of
operators that can be seen as generalizations of classical infimal convolution
operators. This class of functionals exhaustively covers existing
regularization approaches and is flexible enough to craft new ones in a simple
and constructive way. We provide well-posedness and convergence results for
the proposed class of functionals in a general setting. Further, we consider a
bilevel optimization approach to learn optimal weights for such regularization
graphs from training data. We demonstrate that this approach is capable of
optimizing the structure and the complexity of a regularization graph,
allowing it, for example, to automatically select a combination of
regularizers that is optimal for given training data.
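The bilevel idea of learning regularization weights from training data can be illustrated in a deliberately simplified setting: a Tikhonov lower-level problem with a closed-form solution and a single scalar weight selected by grid search against a ground-truth signal. This is only a toy analogue; the regularization-graph structure and its multiple weights are not modeled here, and all names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))           # forward operator (toy)
u_true = rng.standard_normal(20)            # ground-truth training signal
f = A @ u_true + 0.1 * rng.standard_normal(30)  # noisy measurement

def lower_level(lam):
    # lower-level problem: argmin_u ||A u - f||^2 + lam ||u||^2 (closed form)
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ f)

# upper level: choose the weight whose reconstruction best matches the truth
lams = np.logspace(-4, 2, 50)
losses = [np.linalg.norm(lower_level(l) - u_true) for l in lams]
best = lams[int(np.argmin(losses))]
```

In the paper's setting the lower-level problem is nonsmooth and has no closed form, so grid search is replaced by genuine bilevel optimization over the graph's weights; the nested structure, however, is the same.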
Unsupervised energy disaggregation via convolutional sparse coding
In this work, a method for unsupervised energy disaggregation in private
households equipped with smart meters is proposed. This method aims to classify
power consumption as active or passive, granting the ability to report on the
residents' activity and presence without direct interaction. This lays the
foundation for applications like non-intrusive health monitoring of private
homes.
The proposed method is based on minimizing a suitable energy functional, for
which the iPALM (inertial proximal alternating linearized minimization)
algorithm is employed; we verify that the conditions guaranteeing its
convergence are satisfied.
To confirm the feasibility of the proposed method, we provide experiments on
semi-synthetic test data sets and a comparison to existing supervised methods.
(Comment: 9 pages, 2 figures, 3 tables)
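For readers unfamiliar with (i)PALM, the following is a minimal sketch of an inertial proximal alternating scheme on a generic two-block nonconvex problem (nonnegative matrix factorization), not the energy functional of the paper; the inertia parameter, the damping of the step size, and the function name `ipalm_nmf` are illustrative assumptions.

```python
import numpy as np

def ipalm_nmf(Y, rank=2, beta=0.3, n_iter=300, seed=0):
    """Sketch of iPALM on min_{X>=0, Z>=0} 0.5 * ||X @ Z - Y||_F^2:
    alternating inertial proximal-gradient steps over the blocks X and Z,
    where the prox of the nonnegativity constraint is a projection."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    X = rng.random((m, rank)); Z = rng.random((rank, n))
    X_prev, Z_prev = X.copy(), Z.copy()
    damp = (1.0 - 2.0 * beta) / (1.0 + 2.0 * beta)  # conservative damping; assumes beta < 1/2
    for _ in range(n_iter):
        # inertial extrapolation, then projected gradient step in X
        Xb = X + beta * (X - X_prev)
        Lx = max(np.linalg.norm(Z @ Z.T, 2), 1e-8)  # Lipschitz const. of grad_X
        X_prev = X
        X = np.maximum(0.0, Xb - damp * ((Xb @ Z - Y) @ Z.T) / Lx)
        # same scheme for the block Z
        Zb = Z + beta * (Z - Z_prev)
        Lz = max(np.linalg.norm(X.T @ X, 2), 1e-8)
        Z_prev = Z
        Z = np.maximum(0.0, Zb - damp * (X.T @ (X @ Zb - Y)) / Lz)
    return X, Z
```

The block-wise Lipschitz constants computed each sweep are exactly the "various conditions" one must check for convergence guarantees of this algorithm family.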
A sparse optimization approach to infinite infimal convolution regularization
In this paper we introduce the class of infinite infimal convolution
functionals and apply these functionals to the regularization of ill-posed
inverse problems. The proposed regularization involves an infimal convolution
of a continuously parametrized family of convex, positively one-homogeneous
functionals defined on a common Banach space . We show that, under mild
assumptions, this functional admits an equivalent convex lifting in the space
of measures with values in . This reformulation allows us to prove
well-posedness of a Tikhonov regularized inverse problem and opens the door to
a sparse analysis of the solutions. In the case of finite-dimensional
measurements we prove a representer theorem, showing that there exists a
solution of the inverse problem that is sparse, in the sense that it can be
represented as a linear combination of the extremal points of the ball of the
lifted infinite infimal convolution functional. Then, we design a generalized
conditional gradient method for computing solutions of the inverse problem
without relying on an a priori discretization of the parameter space and of the
Banach space . The iterates are constructed as linear combinations of the
extremal points of the lifted infinite infimal convolution functional. We prove
a sublinear rate of convergence for our algorithm and apply it to denoising of
signals and images using, as regularizer, infinite infimal convolutions of
fractional-Laplacian-type operators with adaptive orders of smoothness and
anisotropies.
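The generalized conditional gradient idea of building iterates as sparse combinations of extreme points has a familiar finite-dimensional analogue: Frank-Wolfe over an l1-ball, whose extreme points are signed, scaled coordinate vectors. The sketch below shows only that analogue, under a standard O(1/k) step-size rule; the paper's measure-valued lifting and continuously parametrized dictionary are not reproduced here.

```python
import numpy as np

def frank_wolfe_l1(A, b, tau, n_iter=200):
    """Conditional gradient for min 0.5 ||A x - b||^2 over {||x||_1 <= tau}.
    Each step adds one extreme point (+-tau * e_i), so iterates stay sparse."""
    n = A.shape[1]
    x = np.zeros(n)
    for k in range(n_iter):
        g = A.T @ (A @ x - b)                 # gradient of the quadratic data term
        i = int(np.argmax(np.abs(g)))         # linear minimization oracle over the ball:
        s = np.zeros(n)
        s[i] = -tau * np.sign(g[i])           # extreme point minimizing <g, s>
        gamma = 2.0 / (k + 2.0)               # standard step size, sublinear O(1/k) rate
        x = (1 - gamma) * x + gamma * s
    return x
```

As in the paper's method, no a priori discretization of the solution's support is needed: the oracle selects one extreme point per iteration, and sparsity of the iterate follows by construction.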